The distributed representation of symbols is one of the key technologies in machine learning systems today, playing a pivotal role in modern natural language processing. Traditional word embeddings associate a separate vector with each word. While this approach is simple and leads to good performance, it requires substantial memory to represent a large vocabulary. To reduce the memory footprint, the default embedding layer in spaCy is a hash embeddings layer. It is a stochastic approximation of traditional embeddings that provides unique vectors for a large number of words without explicitly storing a separate vector for each of them. To be able to compute meaningful representations for both known and unknown words, hash embeddings represent each word as a summary of the normalized word form, subword information and word shape. Together, these features produce a multi-embedding of a word. In this technical report we first lay out a bit of history and introduce the embedding methods in spaCy in detail. Second, we critically evaluate the hash embedding architecture with multi-embeddings on Named Entity Recognition datasets from a variety of domains and languages. The experiments validate most key design choices behind spaCy's embedders, but we also uncover a few surprising results.
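The hashing trick behind such an embedding layer can be sketched in a few lines. The table size, the hash seeds and the exact feature set below are illustrative choices, not spaCy's actual configuration; the point is only that several hashed lookups over several word features yield a distinct vector for almost any word, seen or unseen, from a fixed-size table.

```python
import numpy as np

def hash_embed(key: str, table: np.ndarray, seeds=(0, 1, 2, 3)) -> np.ndarray:
    """Map a string to a vector by summing table rows picked by several hashes.

    A collision under any single hash is smoothed out by the other hashes,
    so distinct keys still tend to receive distinct vectors.
    """
    vec = np.zeros(table.shape[1])
    for seed in seeds:
        row = hash((seed, key)) % table.shape[0]
        vec += table[row]
    return vec

def word_shape(word: str) -> str:
    # Crude shape feature: letters become x/X, digits become d.
    return "".join(
        "d" if c.isdigit() else "X" if c.isupper() else "x" if c.isalpha() else c
        for c in word
    )

rng = np.random.default_rng(0)
table = rng.normal(size=(1000, 64))  # one small shared table, not one row per word

def multi_embed(word: str) -> np.ndarray:
    # Summarize the word as normalized form, prefix, suffix and shape,
    # then sum the hashed embeddings of those features.
    feats = [word.lower(), word[:3], word[-3:], word_shape(word)]
    return sum(hash_embed(f, table) for f in feats)

v = multi_embed("Tokyo2020")  # works for words never seen during training
```

Note that Python's built-in `hash` is salted per process; a real implementation would use a stable hash function so that vectors are reproducible across runs.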
The prototypical NLP experiment trains a standard architecture on labeled English data and optimizes for accuracy, without accounting for other dimensions such as fairness, interpretability, or computational efficiency. We show through a manual classification of recent NLP research papers that this is indeed the case, and refer to it as the square one experimental setup. We observe that NLP research often goes beyond the square one setup, e.g., focusing not only on accuracy but also on fairness or interpretability, yet typically only along a single dimension. Most work targeting multilinguality, for example, considers only accuracy; most work on fairness or interpretability considers only English; and so on. We show this through manual classification of recent NLP research papers and ACL Test-of-Time award winners. Such one-dimensionality of most research means we are only exploring a fraction of the NLP research search space. We provide historical and recent examples of how the square one bias has led researchers to draw false conclusions or make unwise choices, point to promising yet unexplored directions on the research manifold, and make practical recommendations to enable more multi-dimensional research. We openly release our annotation results to enable further analysis at https://github.com/google-research/url-nlp
We aim to learn language models for Creole languages, for which large amounts of data are not readily available, and therefore explore the potential transfer from their ancestor languages (the "Ancestry Transfer Hypothesis"). We find that standard transfer methods do not facilitate ancestry transfer. Surprisingly, and unlike for other non-Creole languages, a very distinct two-phase pattern emerges for Creoles: as the training loss plateaus and the language models begin to overfit their source languages, perplexity on the Creoles drops. We explore whether this compression phase can lead to practically useful language models (the "Ancestry Bottleneck Hypothesis"), but falsify this as well. Moreover, we show that Creoles exhibit this two-phase pattern even when trained on random, unrelated languages. Creoles thus seem to be typological outliers, and we speculate whether there is a connection between the two observations.
Spoken language understanding (SLU) tasks are typically solved by first transcribing an utterance with automatic speech recognition (ASR) and then feeding the output to a text-based model. Recent advances in self-supervised representation learning for speech aim to improve the ASR component. We investigate whether representation learning for speech has matured enough to replace ASR in SLU. We compare learned speech features from wav2vec 2.0 and state-of-the-art ASR transcripts as input for a novel speech-based named entity recognition task on real-world emergency calls and for two existing SLU benchmarks. We show that the learned speech features outperform ASR transcripts on three classification tasks. For machine translation, ASR transcripts are still the better choice. We highlight the intrinsic robustness of wav2vec 2.0 representations to out-of-vocabulary words as key to the better performance.
Instance-based interpretability methods for finding the training examples that influence a test-time decision have recently been proposed, including influence functions, TracIn, Representer Point Selection, Grad-Dot, and Grad-Cos. Typically, these methods are evaluated using LOO influence (Cook's distance) as a gold standard, or using various heuristics. In this paper, we show that all of the above methods are unstable, i.e., highly sensitive to initialization, the ordering of the training data, and batch size. We suggest that this is a natural consequence of how these methods are derived in the literature, which assumes that the influence of an example is independent of the model state and of the other examples, and we argue that it is not. We show that LOO influence and the heuristics are poor metrics for measuring the quality of instance-based explanations, and instead propose to evaluate such explanations by their ability to detect poisoning attacks. Moreover, we provide a simple yet effective baseline that improves all of the above methods and show how it yields very significant improvements on downstream tasks.
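Two of the methods named above, Grad-Dot and Grad-Cos, score a training example by the similarity between its loss gradient and the test example's loss gradient. A minimal sketch for logistic regression follows; the model state and data are illustrative toys. Note that both scores depend on the parameter vector `w`, which is exactly the sensitivity to model state that the abstract criticizes.

```python
import numpy as np

def lr_grad(w, x, y):
    # Gradient of the logistic loss at a single example (label y in {0, 1}).
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def grad_dot(w, train, x_test, y_test):
    # Grad-Dot: dot product between training and test gradients
    # (higher = the training example is deemed more influential).
    g_test = lr_grad(w, x_test, y_test)
    return np.array([lr_grad(w, x, y) @ g_test for x, y in train])

def grad_cos(w, train, x_test, y_test):
    # Grad-Cos: the same idea with cosine similarity instead.
    g_test = lr_grad(w, x_test, y_test)
    return np.array([
        lr_grad(w, x, y) @ g_test
        / (np.linalg.norm(lr_grad(w, x, y)) * np.linalg.norm(g_test) + 1e-12)
        for x, y in train
    ])

# Toy model state and training set.
w = np.zeros(3)
train = [(np.array([1.0, 0.0, 0.0]), 1), (np.array([0.0, 1.0, 0.0]), 0)]
scores = grad_dot(w, train, np.array([1.0, 0.0, 0.0]), 1)
cos_scores = grad_cos(w, train, np.array([1.0, 0.0, 0.0]), 1)
```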
Recent successes of massively overparameterized models have inspired a new line of work investigating the underlying conditions that enable overparameterized models to generalize well. This paper considers a framework where the possibly overparametrized model includes fake features, i.e., features that are present in the model but not in the data. We present a non-asymptotic high-probability bound on the generalization error of the ridge regression problem under the model misspecification of having fake features. Our high-probability results characterize the interplay between the implicit regularization provided by the fake features and the explicit regularization provided by the ridge parameter. We observe that fake features may improve the generalization error, even though they are irrelevant to the data.
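One plausible way to set up such a misspecified model is to fit ridge regression on a design matrix padded with random columns that play no role in generating the targets. The sketch below is illustrative and not the paper's exact setting; the dimensions, noise level and regularization strength are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_true, d_fake = 50, 10, 40
X = rng.normal(size=(n, d_true))       # features that actually generate the data
w_true = rng.normal(size=d_true)
y = X @ w_true + 0.1 * rng.normal(size=n)

fake = rng.normal(size=(n, d_fake))    # "fake" features: in the model, not the data
A = np.hstack([X, fake])               # misspecified, over-sized design matrix

def ridge(A, y, lam):
    # Closed-form ridge solution: (A^T A + lam * I)^{-1} A^T y.
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

w_hat = ridge(A, y, lam=1.0)
```

The fake columns act as a source of implicit regularization that interacts with the explicit ridge parameter `lam`, which is the interplay the bound in the abstract characterizes.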
We focus on the continual learning problem where the tasks arrive sequentially and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks. In contrast to the continual learning literature focusing on the centralized setting, we investigate the distributed estimation framework. We consider the well-established distributed learning algorithm \cocoa{}. We derive closed-form expressions for the iterations in the overparametrized case. We illustrate the convergence and the error performance of the algorithm based on the over/under-parametrization of the problem. Our results show that depending on the problem dimensions and data generation assumptions, \cocoa{} can perform continual learning over a sequence of tasks, i.e., it can learn a new task without forgetting previously learned tasks, with access to only one task at a time.
Using 3D CNNs on high resolution medical volumes is very computationally demanding, especially for large datasets like the UK Biobank which aims to scan 100,000 subjects. Here we demonstrate that using 2D CNNs on a few 2D projections (representing mean and standard deviation across axial, sagittal and coronal slices) of the 3D volumes leads to reasonable test accuracy when predicting age from brain volumes. Using our approach, one training epoch with 20,324 subjects takes 40-70 seconds using a single GPU, which is almost 100 times faster compared to a small 3D CNN. These results are important for researchers who do not have access to expensive GPU hardware for 3D CNNs.
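The projection step described above is straightforward to sketch: collapse the volume along each of the three axes with a mean and a standard deviation, and stack the six resulting images as input channels for a 2D CNN. The axis naming and cubic-volume assumption below are illustrative.

```python
import numpy as np

def project_volume(vol: np.ndarray) -> np.ndarray:
    """Collapse a 3D volume into six 2D projections.

    For each of the three axes (roughly axial, sagittal, coronal) we take
    the mean and the standard deviation across the slices along that axis.
    Assumes a cubic volume so all projections share one shape; real data
    would need resampling or padding first.
    """
    projections = []
    for axis in range(3):
        projections.append(vol.mean(axis=axis))
        projections.append(vol.std(axis=axis))
    return np.stack(projections)  # shape (6, H, W) for a cubic volume

vol = np.random.default_rng(0).normal(size=(64, 64, 64))
channels = project_volume(vol)  # six 64 x 64 images instead of a 64^3 volume
```

The payoff is that the 2D CNN then processes six images per subject rather than a full volume, which is where the roughly 100x speedup comes from.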
Large annotated datasets are required to train segmentation networks. In medical imaging, it is often difficult, time consuming and expensive to create such datasets, and it may also be difficult to share these datasets with other researchers. Different AI models can today generate very realistic synthetic images, which can potentially be openly shared as they do not belong to specific persons. However, recent work has shown that using synthetic images for training deep networks often leads to worse performance compared to using real images. Here we demonstrate that using synthetic images and annotations from an ensemble of 10 GANs, instead of from a single GAN, increases the Dice score on real test images by 4.7% to 14.0% on specific classes.
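The Dice score used as the evaluation metric above measures the overlap between a predicted and a reference segmentation mask; a minimal sketch:

```python
import numpy as np

def dice(pred, target):
    # Dice overlap between two binary masks: 2|A & B| / (|A| + |B|).
    # Returns 1.0 for two empty masks by convention.
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
score = dice(a, b)  # 2 * 1 / (2 + 1) = 2/3
```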
Recent work shows that the expressive power of Graph Neural Networks (GNNs) in distinguishing non-isomorphic graphs is exactly the same as that of the Weisfeiler-Lehman (WL) graph test. In particular, they show that the WL test can be simulated by GNNs. However, those simulations involve neural networks for the 'combine' function of size polynomial or even exponential in the number of graph nodes $n$, as well as feature vectors of length linear in $n$. We present an improved simulation of the WL test on GNNs with \emph{exponentially} lower complexity. In particular, the neural network implementing the combine function in each node has only a polylogarithmic number of parameters in $n$, and the feature vectors exchanged by the nodes of the GNN consist of only $O(\log n)$ bits. We also give logarithmic lower bounds for the feature vector length and the size of the neural networks, showing the (near-)optimality of our construction.